Improving the standard of care for diseases hinges on better treatments, which in turn rely on finding and developing new drugs. However, drug discovery is a complex and expensive process. Machine learning approaches have been adopted through the creation of drug discovery knowledge graphs, which exploit the inherently interconnected nature of the domain. Graph-based data modelling, combined with knowledge graph embeddings, provides a more intuitive representation of the domain that is suitable for inference tasks such as predicting missing links. One such example would produce a ranked list of genes likely to be associated with a given disease, often referred to as target discovery. It is therefore critical that these predictions are not only pertinent but also biologically meaningful. However, knowledge graphs can be biased, either directly due to the underlying data sources that are integrated, or due to modelling choices made during graph construction, one consequence of which is that certain entities can become topologically over-represented. We demonstrate that knowledge graph embedding models can be affected by this structural imbalance, resulting in densely connected entities being highly ranked regardless of context. We provide support for this observation across different datasets, models and prediction tasks. Further, we show how the graph topology can be perturbed with random, biologically meaningless information to artificially alter the rank of a gene. This suggests that such models may be influenced more by entity frequency than by the biological information encoded in the relations, creating issues when entity frequency is not a true reflection of the underlying data. Our results highlight the importance of data modelling choices, and emphasise that practitioners should be mindful of these issues both when interpreting model outputs and during knowledge graph composition.
In recent years, numerous machine learning models which attempt to solve polypharmacy side effect identification, drug-drug interaction prediction and combination therapy design tasks have been proposed. Here, we present a unified theoretical view of relational machine learning models which can address these tasks. We provide fundamental definitions, compare existing model architectures and discuss performance metrics, datasets and evaluation protocols. In addition, we emphasize possible high impact applications and important future research directions in this domain.
The drug discovery and development process is long and costly, taking on average over one billion dollars and 10-15 years per drug. To reduce the high level of attrition throughout the process, there has been a growing trend over the last decade to apply machine learning methods to various stages of drug discovery and development, especially at the earliest stage: the identification of druggable disease genes. In this paper, we develop a new tensor factorisation model to predict potential drug targets (genes or proteins) for treating diseases. We created a three-dimensional data tensor consisting of 1,048 gene targets, 860 diseases and 230,011 evidence attributes and clinical outcomes, using data extracted from Open Targets and drug databases. We enriched the data with gene-target representations learned from a drug discovery knowledge graph, and applied our proposed method to predict the clinical outcomes of unseen gene-target and disease pairs. We designed three evaluation strategies to measure prediction performance, and benchmarked several commonly used machine learning classifiers together with Bayesian matrix and tensor factorisation methods. The results show that incorporating knowledge graph embeddings significantly improves prediction accuracy, and that training the tensor factorisation alongside a dense neural network outperforms all other baselines. In summary, our framework combines two actively studied machine learning approaches to disease target identification, namely tensor factorisation and knowledge graph representation learning, which could be a promising avenue for further exploration in data-driven drug discovery.
Drug discovery and development is a complex and costly process. Machine learning approaches are being investigated to help improve the effectiveness and speed of multiple stages of the drug discovery pipeline. Among these, those using knowledge graphs (KGs) have shown promise in many tasks, including drug repurposing, drug toxicity prediction and target-gene-disease prioritisation. In a drug discovery KG, crucial elements including genes, diseases and drugs are represented as entities, whilst relationships between them indicate an interaction. However, to construct high-quality KGs, suitable data are required. In this review, we detail publicly available sources suitable for use in constructing drug discovery focused KGs. We aim to help guide machine learning and KG practitioners who wish to apply new techniques to the drug discovery field, but who may be unfamiliar with the relevant data sources. The datasets are selected via strict criteria, categorised according to the primary type of information they contain, and considered based upon what information could be extracted to build a KG. We then present a comparative analysis of existing public drug discovery KGs, and an evaluation of selected motivating case studies from the literature. Additionally, we raise numerous and varied challenges and issues associated with the domain and its datasets, whilst also highlighting key future research directions. We hope this review will motivate the use of KGs in key and emerging questions in the drug discovery domain.
Text classification has long been a staple of natural language processing (NLP), with applications spanning a variety of areas such as sentiment analysis, recommender systems and spam detection. With such a powerful solution available, it is often tempting to use it as the go-to tool for all NLP problems, since when you hold a hammer, everything looks like a nail. Here, however, we argue that many tasks currently addressed using classification are in fact being shoehorned into a classification mould, and that if we instead address them as ranking problems, we not only improve the model, but achieve better performance. We propose a novel end-to-end ranking approach consisting of a Transformer network responsible for producing representations for a pair of text sequences, which are in turn passed into a context aggregation network that outputs a score used to determine the ordering of the sequences with respect to some notion of relevance. We perform numerous experiments on publicly available datasets and investigate ranking on problems often addressed using classification. In one experiment, on a heavily skewed sentiment analysis dataset, converting the ranking results to classification labels yields an improvement of approximately 22% over state-of-the-art text classification, demonstrating the efficacy of text ranking over text classification in certain scenarios.
With the growing sophistication and volume of cyber attacks, combined with complex network structures, it is becoming extremely difficult for security analysts to corroborate evidence and identify multistage campaigns on their network. This work develops HeAT (Heated Alert Triage): given a critical indicator of compromise (IoC), e.g., a severe IDS alert, HeAT produces a HeATed Attack Campaign (HAC) depicting the multistage activities that led up to the critical event. We define the concept of "Alert Episode Heat" to represent the analyst's opinion of how much an event contributes to the attack campaign of the critical IoC, given their knowledge of the network and security expertise. Leveraging a network-agnostic feature set, HeAT learns the essence of the analyst's assessment of "HeAT" for a small set of IoCs, and applies the learned model to extract insightful attack campaigns for IoCs not seen before, even across networks by transferring what has been learned. We demonstrate the capabilities of HeAT with data collected in the Collegiate Penetration Testing Competition (CPTC) and through collaboration with a real-world SOC. We developed HeAT-Gain metrics to demonstrate how analysts may assess and benefit from the extracted attack campaigns in comparison to common practices where IP addresses are used to corroborate evidence. Our results demonstrate the practical uses of HeAT: it finds campaigns that span diverse attack stages, removes a significant volume of irrelevant alerts, and achieves coherency with the analyst's original assessments.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
Prognostication for lung cancer, a leading cause of mortality, remains a complex task, as it needs to quantify the associations of risk factors and health events spanning a patient's entire life. One challenge is that an individual's disease course involves non-terminal (e.g., disease progression) and terminal (e.g., death) events, which form semi-competing relationships. Our motivation comes from the Boston Lung Cancer Study, a large lung cancer survival cohort, which investigates how risk factors influence a patient's disease trajectory. Following developments in the prediction of time-to-event outcomes with neural networks, deep learning has become a focal area for the development of risk prediction methods in survival analysis. However, limited work has been done to predict multi-state or semi-competing risk outcomes, where a patient may experience adverse events such as disease progression prior to death. We propose a novel neural expectation-maximization algorithm to bridge the gap between classical statistical approaches and machine learning. Our algorithm enables estimation of the non-parametric baseline hazards of each state transition, risk functions of predictors, and the degree of dependence among different transitions, via a multi-task deep neural network with transition-specific sub-architectures. We apply our method to the Boston Lung Cancer Study and investigate the impact of clinical and genetic predictors on disease progression and mortality.
Self-supervised learning (SSL) aims to produce useful feature representations without access to any human-labeled data annotations. Due to the success of recent SSL methods based on contrastive learning, such as SimCLR, this problem has gained popularity. Most current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective and then discard the learned projection head after training. This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training? In this work, we first perform a systematic study on the behavior of SSL training focusing on the role of the projection head layers. By formulating the projection head as a parametric component for the InfoNCE objective rather than a part of the network, we present an alternative optimization scheme for training contrastive learning based SSL frameworks. Our experimental study on multiple image classification datasets demonstrates the effectiveness of the proposed approach over alternatives in the SSL literature.
We address the problem of unsupervised domain adaptation when the source domain differs from the target domain because of a shift in the distribution of a latent subgroup. When this subgroup confounds all observed data, neither covariate shift nor label shift assumptions apply. We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain, and unlabeled data from the target. The identification results are constructive, immediately suggesting an algorithm for estimating the optimal predictor in the target. For continuous observations, when this algorithm becomes impractical, we propose a latent variable model specific to the data generation process at hand. We show how the approach degrades as the size of the shift changes, and verify that it outperforms both covariate and label shift adjustment.